Explicit Explore-Exploit Algorithms in Continuous State Spaces
We present a new model-based algorithm for reinforcement learning (RL) which consists of explicit exploration and exploitation phases, and is applicable in large or infinite state spaces. The algorithm maintains a set of dynamics models consistent with current experience and explores by finding policies which induce high disagreement between their state predictions. It then exploits using the refined set of models or experience gathered during exploration. We show that under realizability and optimal planning assumptions, our algorithm provably finds a near-optimal policy with a number of samples that is polynomial in a structural complexity measure which we show to be low in several natural settings. We then give a practical approximation using neural networks and demonstrate its performance and sample efficiency in practice.
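The exploration-by-disagreement loop described above is easy to illustrate. Below is a minimal, hypothetical sketch (not the authors' implementation): an ensemble of toy dynamics models is refit on bootstrapped experience, actions are chosen during exploration to maximize prediction variance across the ensemble, and the refined models are then used for greedy exploitation. All names here (`LinearModel`, `true_dynamics`, the 1-D linear system, and the reward) are illustrative assumptions, not from the paper.

```python
# Illustrative sketch only, not the authors' implementation: an ensemble of
# dynamics models explores by maximizing prediction disagreement, then
# exploits with the refined models. The 1-D system and reward are toys.
import numpy as np

rng = np.random.default_rng(0)

def true_dynamics(s, a):
    """Unknown environment the agent interacts with (toy 1-D system)."""
    return 0.9 * s + a + 0.01 * rng.normal()

class LinearModel:
    """Toy dynamics model s' ~ w0*s + w1*a, fit by least squares."""
    def __init__(self):
        self.w = rng.normal(size=2)   # random init => initial disagreement

    def fit(self, data):
        # Bootstrap resampling keeps the ensemble diverse on shared data.
        idx = rng.choice(len(data), size=len(data))
        X = np.array([[data[i][0], data[i][1]] for i in idx])
        y = np.array([data[i][2] for i in idx])
        self.w, *_ = np.linalg.lstsq(X, y, rcond=None)

    def predict(self, s, a):
        return self.w @ np.array([s, a])

def disagreement(models, s, a):
    """Variance of predicted next states across the ensemble."""
    return np.var([m.predict(s, a) for m in models])

models = [LinearModel() for _ in range(5)]
actions = np.linspace(-1.0, 1.0, 9)
data, s = [], 0.0

# Exploration phase: seek transitions where the models disagree most.
for _ in range(50):
    a = max(actions, key=lambda a: disagreement(models, s, a))
    sp = true_dynamics(s, a)
    data.append((s, a, sp))
    for m in models:
        m.fit(data)
    s = sp

# Exploitation phase: plan greedily with the (now agreeing) models.
def reward(s):                # hypothetical reward: drive the state to 1.0
    return -(s - 1.0) ** 2

s = 0.0
for _ in range(10):
    a = max(actions, key=lambda a: reward(models[0].predict(s, a)))
    s = true_dynamics(s, a)
print(f"final state after exploitation: {s:.3f}")
```

In the paper, the greedy one-step exploitation above would be replaced by planning (assumed optimal) against the refined model set; the sketch only conveys how ensemble disagreement serves as the exploration signal.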
Reviews: Explicit Explore-Exploit Algorithms in Continuous State Spaces
I have read the authors' feedback and the other reviews, and I will keep my original score. The agent collects data while running the algorithm, and the goal is to find a near-optimal policy. The use of ranks of error matrices as a complexity measure, as well as some of the proof techniques, is related to [18] and [41]. The paper is technically sound and clearly written. On the theoretical side, the authors prove a polynomial sample complexity bound in terms of A, H, and the rank of the model misfit matrix, thus avoiding any dependence on S.
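For orientation, the generic PAC-style shape of such a rank-based guarantee is sketched below; this is only the form implied by the review, with epsilon and delta the standard accuracy and failure-probability parameters, and the exact exponents and logarithmic factors left to the paper's theorem.

```latex
% Illustrative PAC-style shape only; the paper's theorem fixes the exponents.
\Pr\big[\, V^{\hat{\pi}} \ge V^{\star} - \epsilon \,\big] \ge 1 - \delta
\quad \text{after} \quad
N = \mathrm{poly}\!\big(d,\ A,\ H,\ 1/\epsilon,\ \log(1/\delta)\big)
\ \text{samples},
```

where d is the rank of the model misfit matrix, A the number of actions, and H the horizon; notably, there is no dependence on the size of the state space S, which may be infinite.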